[Figure: probability density function of a multivariate (bivariate) Gaussian distribution centered at (1, 3), with standard deviation 3 in roughly the (0.878, 0.478) direction and 1 in the orthogonal direction.]
| Multivariate normal distribution | |
|---|---|
| Parameters | $\mu \in \mathbb{R}^k$ — location; $\Sigma \in \mathbb{R}^{k \times k}$ — covariance (nonnegative-definite matrix) |
| Support | $x \in \mu + \operatorname{span}(\Sigma) \subseteq \mathbb{R}^k$ |
| pdf | $(2\pi)^{-k/2}\det(\Sigma)^{-1/2}\exp\!\left(-\tfrac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)$ (exists only for positive-definite $\Sigma$) |
| cdf | (no analytic expression) |
| Mean | $\mu$ |
| Mode | $\mu$ |
| Variance | $\Sigma$ |
| Entropy | $\tfrac{1}{2}\ln\!\left((2\pi e)^k\det(\Sigma)\right)$ |
| mgf | $\exp\!\left(\mu^T t + \tfrac{1}{2}t^T\Sigma t\right)$ |
| cf | $\exp\!\left(i\mu^T t - \tfrac{1}{2}t^T\Sigma t\right)$ |
In probability theory and statistics, the multivariate normal distribution, or multivariate Gaussian distribution, is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. A random vector is said to be multivariate normally distributed if every linear combination of its components has a univariate normal distribution.
The multivariate normal distribution of a $k$-dimensional random vector $X = (X_1, \ldots, X_k)^T$ can be written in the following notation:

$$X \sim \mathcal{N}(\mu, \Sigma),$$

or to make it explicitly known that $X$ is $k$-dimensional,

$$X \sim \mathcal{N}_k(\mu, \Sigma),$$

with $k$-dimensional mean vector

$$\mu = \operatorname{E}[X] = \left(\operatorname{E}[X_1], \operatorname{E}[X_2], \ldots, \operatorname{E}[X_k]\right)^T$$

and $k \times k$ covariance matrix

$$\Sigma = \operatorname{E}\!\left[(X - \mu)(X - \mu)^T\right] = \left[\operatorname{Cov}(X_i, X_j)\right]_{i,j=1}^{k}.$$
A random vector $X = (X_1, \ldots, X_k)^T$ is said to have the multivariate normal distribution if it satisfies the following equivalent conditions [1]:

- Every linear combination of its components, $Y = a_1 X_1 + \cdots + a_k X_k$, is normally distributed.
- There exist a random $\ell$-vector $Z$ whose components are independent standard normal random variables, a $k$-vector $\mu$, and a $k \times \ell$ matrix $A$, such that $X = AZ + \mu$; in this case $\Sigma = AA^T$.
- The characteristic function of $X$ is $\varphi_X(u) = \exp\!\left(i u^T \mu - \tfrac{1}{2} u^T \Sigma u\right)$ for some $k$-vector $\mu$ and symmetric nonnegative-definite $k \times k$ matrix $\Sigma$.
The covariance matrix is allowed to be singular (in which case the corresponding distribution has no density). This case arises frequently in statistics; for example, in the distribution of the vector of residuals in ordinary least squares regression. Note also that the $X_i$ are in general not independent; they can be seen as the result of applying the matrix $A$ to a collection of independent Gaussian variables $Z$.
In the 2-dimensional nonsingular case ($k = \operatorname{rank}(\Sigma) = 2$), the probability density function of a vector $[X\ Y]^T$ is

$$f(x, y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}} \exp\!\left( -\frac{1}{2(1-\rho^2)} \left[ \frac{(x-\mu_X)^2}{\sigma_X^2} + \frac{(y-\mu_Y)^2}{\sigma_Y^2} - \frac{2\rho (x-\mu_X)(y-\mu_Y)}{\sigma_X \sigma_Y} \right] \right),$$

where $\rho$ is the correlation between $X$ and $Y$. In this case,

$$\mu = \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{pmatrix}.$$
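As a quick numerical check, the explicit bivariate formula above can be compared against a general multivariate normal pdf routine. The sketch below uses SciPy's `multivariate_normal`; the values of $\mu_X$, $\mu_Y$, $\sigma_X$, $\sigma_Y$, and $\rho$ are arbitrary illustrative choices, not taken from the article.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary illustrative parameters
mu_x, mu_y = 1.0, 3.0
sigma_x, sigma_y, rho = 2.0, 1.5, 0.6

mu = np.array([mu_x, mu_y])
cov = np.array([[sigma_x**2,              rho * sigma_x * sigma_y],
                [rho * sigma_x * sigma_y, sigma_y**2             ]])

x, y = 0.5, 2.0

# Explicit bivariate formula
z = ((x - mu_x)**2 / sigma_x**2
     + (y - mu_y)**2 / sigma_y**2
     - 2 * rho * (x - mu_x) * (y - mu_y) / (sigma_x * sigma_y))
pdf_explicit = np.exp(-z / (2 * (1 - rho**2))) / (
    2 * np.pi * sigma_x * sigma_y * np.sqrt(1 - rho**2))

# General k-dimensional pdf for comparison
pdf_general = multivariate_normal(mean=mu, cov=cov).pdf([x, y])

print(pdf_explicit, pdf_general)  # the two values should agree
```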
In the bivariate case, we also have a theorem that makes the first equivalent condition for multivariate normality less restrictive: it is sufficient to verify that countably infinitely many distinct linear combinations of X and Y are normal in order to conclude that the vector [X Y]′ is bivariate normal.[2]
The cumulative distribution function (cdf) F(x) of a random vector X is defined as the probability that all components of X are less than or equal to the corresponding values in the vector x. Though there is no closed form for F(x), there are a number of algorithms that estimate it numerically. For example, see MVNDST under [2] (includes FORTRAN code) or [3] (includes MATLAB code).
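For instance, a numerical estimate of the multivariate normal cdf is also available in recent versions of SciPy via `multivariate_normal.cdf`; a minimal sketch, with an arbitrary illustrative mean and covariance:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary illustrative parameters
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])

# P(X1 <= 1, X2 <= 0), estimated by numerical integration
p = multivariate_normal(mean=mean, cov=cov).cdf(np.array([1.0, 0.0]))
print(p)
```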
If X and Y are normally distributed and independent, this implies they are "jointly normally distributed", i.e., the pair (X, Y) must have a bivariate normal distribution. However, a pair of jointly normally distributed variables need not be independent.
The fact that two random variables X and Y both have a normal distribution does not imply that the pair (X, Y) has a joint normal distribution. A simple example is one in which X has a normal distribution with expected value 0 and variance 1, and Y = X if |X| > c and Y = −X if |X| < c, where c is about 1.54. There are similar counterexamples for more than two random variables.
If $\mu$ and $\Sigma$ are partitioned as follows

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} \text{ with sizes } \begin{pmatrix} q \times 1 \\ (k-q) \times 1 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \text{ with sizes } \begin{pmatrix} q \times q & q \times (k-q) \\ (k-q) \times q & (k-q) \times (k-q) \end{pmatrix},$$

then the distribution of $x_1$ conditional on $x_2 = a$ is multivariate normal, $(x_1 \mid x_2 = a) \sim \mathcal{N}(\bar{\mu}, \bar{\Sigma})$, with mean

$$\bar{\mu} = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (a - \mu_2)$$

and covariance matrix

$$\bar{\Sigma} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}.$$
This matrix is the Schur complement of $\Sigma_{22}$ in $\Sigma$. This means that to calculate the conditional covariance matrix, one inverts the overall covariance matrix, drops the rows and columns corresponding to the variables being conditioned upon, and then inverts back to get the conditional covariance matrix.
Note that knowing that $x_2 = a$ alters the variance, though the new variance does not depend on the specific value of $a$; perhaps more surprisingly, the mean is shifted by $\Sigma_{12} \Sigma_{22}^{-1} (a - \mu_2)$; compare this with the situation of not knowing the value of $a$, in which case $x_1$ would have distribution $\mathcal{N}_q(\mu_1, \Sigma_{11})$.
The matrix $\Sigma_{12} \Sigma_{22}^{-1}$ is known as the matrix of regression coefficients.
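The conditional mean and covariance above translate directly into a few lines of linear algebra. A minimal NumPy sketch; the partition size and all numerical values are arbitrary illustrative choices:

```python
import numpy as np

# Arbitrary illustrative 3-dimensional example, partitioned as q = 1 and k - q = 2
mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.3, 0.5],
                  [0.3, 1.0, 0.2],
                  [0.5, 0.2, 1.5]])

q = 1
mu1, mu2 = mu[:q], mu[q:]
S11, S12 = Sigma[:q, :q], Sigma[:q, q:]
S21, S22 = Sigma[q:, :q], Sigma[q:, q:]

a = np.array([1.5, 1.0])  # observed value of x2

# Conditional mean and covariance of x1 given x2 = a
mu_bar = mu1 + S12 @ np.linalg.solve(S22, a - mu2)
Sigma_bar = S11 - S12 @ np.linalg.solve(S22, S21)

print(mu_bar, Sigma_bar)
```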
In the bivariate case the conditional distribution of $Y$ given $X$ is

$$Y \mid X = x \ \sim\ \mathcal{N}\!\left(\mu_Y + \frac{\sigma_Y}{\sigma_X}\rho\,(x - \mu_X),\ (1 - \rho^2)\sigma_Y^2\right).$$
In the case

$$\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right),$$

then

$$\operatorname{E}(X_1 \mid X_2 > z) = \rho\,\frac{\varphi(z)}{\Phi(-z)},$$

where this latter ratio is often called the inverse Mills ratio (here $\varphi$ and $\Phi$ denote the standard normal density and cumulative distribution function).
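A quick Monte Carlo check of this identity; the correlation $\rho$ and threshold $z$ below are arbitrary illustrative values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho, z = 0.7, 0.5  # arbitrary illustrative values

# Draw from the standard bivariate normal with correlation rho
cov = np.array([[1.0, rho], [rho, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000)
x1, x2 = samples[:, 0], samples[:, 1]

# Empirical E[X1 | X2 > z] versus rho * phi(z) / Phi(-z)
empirical = x1[x2 > z].mean()
theoretical = rho * norm.pdf(z) / norm.sf(z)  # norm.sf(z) = 1 - Phi(z) = Phi(-z)
print(empirical, theoretical)
```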
To obtain the marginal distribution over a subset of multivariate normal random variables, one only needs to drop the irrelevant variables (the variables that one wants to marginalize out) from the mean vector and the covariance matrix. The proof for this follows from the definitions of multivariate normal distributions and some advanced linear algebra [3].
Example
Let $X = [X_1, X_2, X_3]$ be multivariate normal random variables with mean vector $\mu = [\mu_1, \mu_2, \mu_3]$ and covariance matrix $\Sigma$ (standard parametrization for multivariate normal distributions). Then the joint distribution of $X' = [X_1, X_3]$ is multivariate normal with mean vector $\mu' = [\mu_1, \mu_3]$ and covariance matrix

$$\Sigma' = \begin{pmatrix} \Sigma_{11} & \Sigma_{13} \\ \Sigma_{31} & \Sigma_{33} \end{pmatrix}.$$
If $Y = c + BX$ is an affine transformation of $X \sim \mathcal{N}(\mu, \Sigma)$, where $c$ is an $N \times 1$ vector of constants and $B$ is a constant $N \times k$ matrix, then $Y$ has a multivariate normal distribution with expected value $c + B\mu$ and variance $B\Sigma B^T$, i.e., $Y \sim \mathcal{N}(c + B\mu, B\Sigma B^T)$. In particular, any subset of the $X_i$ has a marginal distribution that is also multivariate normal. To see this, consider the following example: to extract the subset $(X_1, X_2, X_4)^T$, use

$$B = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & 0 & \cdots & 0 \end{pmatrix},$$
which extracts the desired elements directly.
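A minimal NumPy sketch of the affine-transformation rule, used here to form the marginal of $(X_1, X_2, X_4)$; the 5-dimensional mean and covariance are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative 5-dimensional mean and covariance
mu = np.arange(5.0)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T  # positive semi-definite by construction

# Selection matrix picking out components 1, 2 and 4 (1-based indexing as in the text)
B = np.zeros((3, 5))
B[0, 0] = B[1, 1] = B[2, 3] = 1.0

# Marginal (X1, X2, X4) ~ N(B mu, B Sigma B^T)
mu_marginal = B @ mu
Sigma_marginal = B @ Sigma @ B.T
print(mu_marginal)
print(Sigma_marginal)
```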
Another corollary is that the distribution of $Z = b \cdot X$, where $b$ is a constant vector of the same length as $X$ and the dot indicates a vector product, is univariate Gaussian with $Z \sim \mathcal{N}(b \cdot \mu,\ b^T \Sigma b)$. This result follows by using

$$B = \begin{pmatrix} b_1 & b_2 & \cdots & b_n \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}$$

and considering only the first component of the product (the first row of $B$ is the vector $b$). Observe how the positive-definiteness of $\Sigma$ implies that the variance of the dot product must be positive.
An affine transformation of $X$ such as $2X$ is not the same as the sum of two independent realisations of $X$.
The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean.[4] The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix $\Sigma$. The squared relative lengths of the principal axes are given by the corresponding eigenvalues.
If $\Sigma = U\Lambda U^T = U\Lambda^{1/2}\left(U\Lambda^{1/2}\right)^T$ is an eigendecomposition where the columns of $U$ are unit eigenvectors and $\Lambda$ is a diagonal matrix of the eigenvalues, then we have

$$X \sim \mathcal{N}(\mu, \Sigma) \iff X \sim \mu + U\Lambda^{1/2}\,\mathcal{N}(0, I) \iff X \sim \mu + U\,\mathcal{N}(0, \Lambda).$$
Moreover, $U$ can be chosen to be a rotation matrix, as inverting an axis does not have any effect on $\mathcal{N}(0, \Lambda)$, but inverting a column changes the sign of $U$'s determinant. The distribution $\mathcal{N}(\mu, \Sigma)$ is in effect $\mathcal{N}(0, I)$ scaled by $\Lambda^{1/2}$, rotated by $U$ and translated by $\mu$.
Conversely, any choice of $\mu$, full-rank matrix $U$, and positive diagonal entries $\Lambda_i$ yields a non-singular multivariate normal distribution. If any $\Lambda_i$ is zero and $U$ is square, the resulting covariance matrix $U\Lambda U^T$ is singular. Geometrically this means that every contour ellipsoid is infinitely thin and has zero volume in $n$-dimensional space, as at least one of the principal axes has length zero.
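As a small illustration of this geometry, the principal axes and their relative lengths can be read off from the eigendecomposition of the covariance matrix; a NumPy sketch, with an arbitrary illustrative covariance matrix:

```python
import numpy as np

# Arbitrary illustrative 2x2 covariance matrix
Sigma = np.array([[3.0, 1.2],
                  [1.2, 1.0]])

# Eigendecomposition: columns of U are unit eigenvectors, lam holds the eigenvalues
lam, U = np.linalg.eigh(Sigma)

# Directions of the principal axes of the equidensity ellipses
print("principal axis directions (columns):\n", U)

# Relative lengths of the principal axes are sqrt(eigenvalues)
print("principal axis lengths:", np.sqrt(lam))

# Sanity check: U diag(lam) U^T reconstructs Sigma
print(np.allclose(U @ np.diag(lam) @ U.T, Sigma))
```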
In general, random variables may be uncorrelated but highly dependent. But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any two or more of its components that are pairwise independent are independent.
But it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent. Two random variables that are normally distributed may fail to be jointly normally distributed, i.e., the vector whose components they are may fail to have a multivariate normal distribution. For an example of two normally distributed random variables that are uncorrelated but not independent, see normally distributed and uncorrelated does not imply independent.
The $k$th-order moments of $X$ are defined by

$$\mu_{1,\ldots,N}(X) \;\stackrel{\mathrm{def}}{=}\; \mu_{r_1,\ldots,r_N}(X) \;\stackrel{\mathrm{def}}{=}\; \operatorname{E}\!\left[\prod_{j=1}^{N} X_j^{r_j}\right],$$

where

$$r_1 + r_2 + \cdots + r_N = k.$$
The $k$th-order central moments are given as follows.

(a) If $k$ is odd, $\mu_{1,\ldots,N}(X - \mu) = 0$.

(b) If $k$ is even with $k = 2\lambda$, then

$$\mu_{1,\ldots,2\lambda}(X - \mu) = \sum\left(\sigma_{ij}\sigma_{k\ell}\cdots\sigma_{XZ}\right),$$

where the sum is taken over all allocations of the set $\{1, \ldots, 2\lambda\}$ into $\lambda$ (unordered) pairs. That is, for a $k$th ($= 2\lambda = 6$) central moment, one sums the products of $\lambda = 3$ covariances (the $\mu$ notation has been dropped in the interests of parsimony, i.e. the $X_i$ are taken to be centered):

$$\begin{aligned}
\operatorname{E}[X_1 X_2 X_3 X_4 X_5 X_6] ={}& \sigma_{12}\sigma_{34}\sigma_{56} + \sigma_{12}\sigma_{35}\sigma_{46} + \sigma_{12}\sigma_{36}\sigma_{45} \\
&+ \sigma_{13}\sigma_{24}\sigma_{56} + \sigma_{13}\sigma_{25}\sigma_{46} + \sigma_{13}\sigma_{26}\sigma_{45} \\
&+ \sigma_{14}\sigma_{23}\sigma_{56} + \sigma_{14}\sigma_{25}\sigma_{36} + \sigma_{14}\sigma_{26}\sigma_{35} \\
&+ \sigma_{15}\sigma_{23}\sigma_{46} + \sigma_{15}\sigma_{24}\sigma_{36} + \sigma_{15}\sigma_{26}\sigma_{34} \\
&+ \sigma_{16}\sigma_{23}\sigma_{45} + \sigma_{16}\sigma_{24}\sigma_{35} + \sigma_{16}\sigma_{25}\sigma_{34}.
\end{aligned}$$
This yields $\tfrac{(2\lambda - 1)!}{2^{\lambda-1}(\lambda-1)!}$ terms in the sum (15 in the above case), each being the product of $\lambda$ (in this case 3) covariances. For fourth-order moments (four variables) there are three terms. For sixth-order moments there are 3 × 5 = 15 terms, and for eighth-order moments there are 3 × 5 × 7 = 105 terms.
The covariances are then determined by replacing the terms of the list $[1, \ldots, 2\lambda]$ by the corresponding terms of the list consisting of $r_1$ ones, then $r_2$ twos, etc. To illustrate this, examine the following 4th-order central moment case:

$$\begin{aligned}
\operatorname{E}[x_i^4] &= 3\sigma_{ii}^2 \\
\operatorname{E}[x_i^3 x_j] &= 3\sigma_{ii}\sigma_{ij} \\
\operatorname{E}[x_i^2 x_j^2] &= \sigma_{ii}\sigma_{jj} + 2\sigma_{ij}^2 \\
\operatorname{E}[x_i^2 x_j x_k] &= \sigma_{ii}\sigma_{jk} + 2\sigma_{ij}\sigma_{ik} \\
\operatorname{E}[x_i x_j x_k x_n] &= \sigma_{ij}\sigma_{kn} + \sigma_{ik}\sigma_{jn} + \sigma_{in}\sigma_{jk},
\end{aligned}$$
where $\sigma_{ij}$ is the covariance of $x_i$ and $x_j$. The idea behind the above method is to first find the general case for a $k$th moment with $k$ distinct $x$ variables, and then simplify accordingly. For example, for $\operatorname{E}[x_i^2 x_k x_n]$ one sets $x_j = x_i$ and uses the fact that $\sigma_{ii} = \sigma_i^2$.
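A quick Monte Carlo sanity check of one of the fourth-order identities above, $\operatorname{E}[x_i^2 x_j^2] = \sigma_{ii}\sigma_{jj} + 2\sigma_{ij}^2$ for centered variables; the covariance matrix is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary illustrative zero-mean covariance
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.5]])

x = rng.multivariate_normal([0.0, 0.0], Sigma, size=2_000_000)
xi, xj = x[:, 0], x[:, 1]

empirical = np.mean(xi**2 * xj**2)
theoretical = Sigma[0, 0] * Sigma[1, 1] + 2 * Sigma[0, 1]**2
print(empirical, theoretical)  # the two values should agree closely
```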
The Kullback–Leibler divergence from $\mathcal{N}_0(\mu_0, \Sigma_0)$ to $\mathcal{N}_1(\mu_1, \Sigma_1)$, for non-singular matrices $\Sigma_0$ and $\Sigma_1$, is:

$$D_{\mathrm{KL}}(\mathcal{N}_0 \,\|\, \mathcal{N}_1) = \frac{1}{2}\left( \operatorname{tr}\!\left(\Sigma_1^{-1}\Sigma_0\right) + (\mu_1 - \mu_0)^T \Sigma_1^{-1} (\mu_1 - \mu_0) - k - \ln\frac{\det\Sigma_0}{\det\Sigma_1} \right),$$

where $k$ is the dimension of the vector space.
The logarithm must be taken to base $e$, since the logarithmic term arises from base-$e$ logarithms of factors of the density function. The equation therefore gives a result measured in nats. Dividing the entire expression above by $\log_e 2$ yields the divergence in bits.
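A direct NumPy translation of the divergence formula above, as a small sketch; the two Gaussians are arbitrary illustrative choices:

```python
import numpy as np

def kl_mvn(mu0, Sigma0, mu1, Sigma1):
    """KL divergence D_KL(N0 || N1) between two multivariate normals, in nats."""
    k = mu0.shape[0]
    Sigma1_inv = np.linalg.inv(Sigma1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(Sigma1_inv @ Sigma0)
                  + diff @ Sigma1_inv @ diff
                  - k
                  - np.log(np.linalg.det(Sigma0) / np.linalg.det(Sigma1)))

# Arbitrary illustrative parameters
mu0, Sigma0 = np.array([0.0, 0.0]), np.array([[1.0, 0.2], [0.2, 1.0]])
mu1, Sigma1 = np.array([1.0, -0.5]), np.array([[2.0, 0.0], [0.0, 0.5]])

print(kl_mvn(mu0, Sigma0, mu1, Sigma1))               # nats
print(kl_mvn(mu0, Sigma0, mu1, Sigma1) / np.log(2))   # bits
```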
The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle and elegant. See estimation of covariance matrices.
In short, the probability density function (pdf) of an $N$-dimensional multivariate normal is

$$f(x) = (2\pi)^{-N/2}\det(\Sigma)^{-1/2}\exp\!\left(-\tfrac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right),$$
and the ML estimator of the covariance matrix from a sample of $n$ observations $x_1, \ldots, x_n$ is

$$\widehat{\Sigma} = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^T,$$
which is simply the sample covariance matrix. This is a biased estimator whose expectation is

$$\operatorname{E}\!\left[\widehat{\Sigma}\right] = \frac{n-1}{n}\,\Sigma.$$
An unbiased sample covariance is

$$\widehat{\Sigma} = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^T.$$
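A minimal NumPy sketch contrasting the two estimators; the data are simulated from an arbitrary illustrative Gaussian:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate n observations from an arbitrary illustrative 2-d Gaussian
true_mu = np.array([1.0, -2.0])
true_Sigma = np.array([[1.5, 0.4],
                       [0.4, 0.8]])
n = 500
X = rng.multivariate_normal(true_mu, true_Sigma, size=n)

xbar = X.mean(axis=0)
centered = X - xbar

Sigma_ml = (centered.T @ centered) / n               # biased ML estimator
Sigma_unbiased = (centered.T @ centered) / (n - 1)   # unbiased sample covariance

print(Sigma_ml)
print(Sigma_unbiased)
print(np.cov(X, rowvar=False))  # np.cov uses the unbiased (n - 1) normalization by default
```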
The Fisher information matrix for estimating the parameters of a multivariate normal distribution has a closed-form expression. This can be used, for example, to compute the Cramér–Rao bound for parameter estimation in this setting. See Fisher information#Multivariate normal distribution for more details.
The differential entropy of the multivariate normal distribution is [6]

$$h(f) = \frac{1}{2}\ln\!\left((2\pi e)^k\left|\Sigma\right|\right),$$

where $\left|\Sigma\right|$ is the determinant of the covariance matrix $\Sigma$.
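This can be evaluated directly from the closed form, or checked against SciPy's built-in entropy for the multivariate normal; a short sketch, with an arbitrary illustrative covariance matrix:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary illustrative covariance matrix
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
k = Sigma.shape[0]

# Differential entropy from the closed-form expression (in nats)
h_formula = 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(Sigma))

# SciPy's built-in value for comparison
h_scipy = multivariate_normal(mean=np.zeros(k), cov=Sigma).entropy()

print(h_formula, h_scipy)  # should agree
```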
Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. The null hypothesis is that the data are drawn from a multivariate normal distribution; therefore a sufficiently small p-value indicates non-normal data. Multivariate normality tests include the Cox–Small test [7] and Smith and Jain's adaptation [8] of the Friedman–Rafsky test.[9]
A widely used method for drawing a random vector $X$ from the $N$-dimensional multivariate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$ (required to be symmetric and positive-definite) works as follows:

1. Find any real matrix $A$ such that $AA^T = \Sigma$. When $\Sigma$ is positive-definite, the Cholesky decomposition is typically used. Alternatively, the matrix $A = U\Lambda^{1/2}$ obtained from a spectral decomposition $\Sigma = U\Lambda U^T$ can be used.
2. Let $Z = (z_1, \ldots, z_N)^T$ be a vector whose components are $N$ independent standard normal variates.
3. Let $X$ be $\mu + AZ$. This has the desired distribution, due to the affine transformation property.
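A minimal NumPy sketch of this procedure, using the Cholesky factor for step 1; the mean and covariance are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Arbitrary illustrative mean and (symmetric, positive-definite) covariance
mu = np.array([1.0, 3.0])
Sigma = np.array([[4.0, 1.5],
                  [1.5, 2.0]])

# Step 1: find A with A A^T = Sigma (Cholesky factor)
A = np.linalg.cholesky(Sigma)

# Steps 2-3: draw standard normal Z and set X = mu + A Z, for many samples at once
n_samples = 100_000
Z = rng.standard_normal((n_samples, len(mu)))
X = mu + Z @ A.T

# Sanity check: sample mean and covariance should be close to mu and Sigma
print(X.mean(axis=0))
print(np.cov(X, rowvar=False))
```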
Hamedani, G. G.; Tata, M. N. (1975). "On the determination of the bivariate normal distribution from distributions of linear combinations of the variables". The American Mathematical Monthly 82 (9): 913–915. doi:10.2307/2318494. JSTOR 2318494. http://jstor.org/stable/2318494.